
    Automatic Detection of Wrecked Airplanes from UAV Images

    Searching for the accident site of a missing airplane is the first step a search and rescue team takes before rescuing victims. However, the vast exploration area, lack of technology, absence of access roads, and rough terrain make the search nontrivial and cause long delays in reaching the victims. This paper therefore develops an automatic wrecked-airplane detection system using visual information from aerial images, such as those captured by a UAV camera. A new deep network is proposed to robustly detect wrecked airplanes, which are highly deformable objects with large variations in pose, scale, and color. The network leverages its last layers to capture more abstract, semantic information for robust detection, and is extended with extra layers connected at the end of the backbone. To reduce missed detections, which are critical for this task, each image is decomposed into five patches that are fed forward through the network in a convolutional manner. Experiments show that the proposed method reaches AP=91.87%, and we believe it can help search and rescue teams accelerate the search for wrecked airplanes and thus reduce the number of victims.
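    The abstract does not specify how the five patches are laid out or how their detections are merged. The following minimal Python sketch assumes four overlapping quadrants plus a center crop and a generic single-image detector passed in as `detect` (both are illustrative assumptions, not the paper's method); it shows how patch-wise detections could be mapped back to full-image coordinates to reduce missed detections.

```python
import numpy as np

def five_patches(image, overlap=0.1):
    """Split an image into four overlapping quadrants plus a center crop.

    The paper only states that the image is decomposed into five patches;
    this particular layout is an assumption for illustration."""
    h, w = image.shape[:2]
    dh, dw = int(h * overlap), int(w * overlap)
    half_h, half_w = h // 2, w // 2
    boxes = [
        (0, 0, half_w + dw, half_h + dh),             # top-left
        (half_w - dw, 0, w, half_h + dh),             # top-right
        (0, half_h - dh, half_w + dw, h),             # bottom-left
        (half_w - dw, half_h - dh, w, h),             # bottom-right
        (w // 4, h // 4, 3 * w // 4, 3 * h // 4),     # center
    ]
    return [(image[y0:y1, x0:x1], (x0, y0)) for (x0, y0, x1, y1) in boxes]

def detect_on_patches(image, detect):
    """Run a single-image detector on each patch and shift its boxes back
    into full-image coordinates. `detect` is a hypothetical callable that
    returns (x0, y0, x1, y1, score) tuples for one patch."""
    results = []
    for patch, (ox, oy) in five_patches(image):
        for (x0, y0, x1, y1, score) in detect(patch):
            results.append((x0 + ox, y0 + oy, x1 + ox, y1 + oy, score))
    return results  # duplicate boxes would typically be merged with NMS
```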

    Synthetically Supervised Feature Learning for Scene Text Recognition

    We address the problem of image feature learning for scene text recognition. The image features in state-of-the-art methods are learned from large-scale synthetic image datasets. However, most methods rely only on the outputs of the synthetic data generation process, namely realistically looking images, and completely ignore the rest of the process. We propose to leverage the parameters that lead to the output images to improve image feature learning. Specifically, for every image produced by the data generation process, we obtain the associated parameters and render another "clean" image that is free of selected distortion factors applied to the output image. Because of the absence of distortion factors, the clean image tends to be easier to recognize than the original image and can therefore serve as supervision. We design a multi-task network with an encoder-discriminator-generator architecture to guide the features of the original image toward those of the clean image. Experiments show that our method significantly outperforms state-of-the-art methods on standard scene text recognition benchmarks in the lexicon-free category. Furthermore, we show that, without explicit handling, our method works on challenging cases where input images contain severe geometric distortion, such as text on a curved path.
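    As a rough illustration of the feature-guidance idea, the PyTorch sketch below computes an L2 feature-matching term that pulls the encoder features of a distorted training image toward those of its rendered clean counterpart. The encoder (`TinyEncoder`) and the loss weighting are stand-ins and not the paper's actual architecture; the discriminator and generator branches of the multi-task network are omitted.

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in convolutional encoder; the paper's backbone differs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

def feature_guidance_loss(encoder, distorted, clean):
    """L2 feature-matching term that pulls the distorted image's features
    toward those of its distortion-free counterpart. The full method also
    uses a discriminator on the features and a generator that reconstructs
    the clean image; those terms are left out of this sketch."""
    f_distorted = encoder(distorted)
    with torch.no_grad():  # treat clean-image features as the supervision target
        f_clean = encoder(clean)
    return nn.functional.mse_loss(f_distorted, f_clean)

# usage (hypothetical names): total = recognition_loss + lam * feature_guidance_loss(enc, x, x_clean)
```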

    Mask TextSpotter: An End-to-End Trainable Neural Network for Spotting Text with Arbitrary Shapes

    Recently, models based on deep neural networks have dominated the fields of scene text detection and recognition. In this paper, we investigate the problem of scene text spotting, which aims at simultaneous text detection and recognition in natural images. We propose an end-to-end trainable neural network model for scene text spotting. The proposed model, named Mask TextSpotter, is inspired by the recently published Mask R-CNN. Different from previous methods that also accomplish text spotting with end-to-end trainable deep neural networks, Mask TextSpotter takes advantage of a simple and smooth end-to-end learning procedure in which precise text detection and recognition are obtained via semantic segmentation. Moreover, it is superior to previous methods in handling text instances of irregular shapes, for example, curved text. Experiments on ICDAR2013, ICDAR2015 and Total-Text demonstrate that the proposed method achieves state-of-the-art results in both scene text detection and end-to-end text recognition tasks.
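    Reading text via semantic segmentation means each detected instance gets per-character probability maps whose peaks are grouped and ordered into a string, rather than decoding a feature sequence. The sketch below decodes such character maps under simplifying assumptions (a fixed 36-symbol alphabet, one occurrence per character class, left-to-right reading order); the actual Mask TextSpotter decoding is more involved.

```python
import numpy as np

# Hypothetical decoder for per-character segmentation maps: one probability
# map per character class inside a detected text region. This only
# illustrates the idea of recognition via semantic segmentation.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def decode_char_maps(char_maps, score_thresh=0.5):
    """char_maps: array of shape (num_classes, H, W) with per-pixel
    character probabilities for one text instance."""
    detections = []
    for cls_idx, prob in enumerate(char_maps):
        mask = prob > score_thresh
        if not mask.any():
            continue
        ys, xs = np.nonzero(mask)
        weights = prob[ys, xs]
        # one detection per class at the probability-weighted centroid
        cx = float((xs * weights).sum() / weights.sum())
        detections.append((cx, ALPHABET[cls_idx]))
    detections.sort(key=lambda d: d[0])  # order characters by x position
    return "".join(ch for _, ch in detections)
```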

    A robust arbitrary text detection system for natural scene images

    Text detection in real-world images captured in unconstrained environments is an important yet challenging computer vision problem due to the great variety of appearances, cluttered backgrounds, and character orientations. In this paper, we present a robust system based on the concepts of Mutual Direction Symmetry (MDS), Mutual Magnitude Symmetry (MMS) and Gradient Vector Symmetry (GVS) to identify text pixel candidates in natural scene images regardless of orientation, including curved text (e.g. circles, arc shapes). The method exploits the fact that text patterns in both the Sobel and Canny edge maps of the input image exhibit similar behavior. For each text pixel candidate, the method explores SIFT features to refine the candidates, which results in text representatives. Next, an ellipse growing process based on a nearest neighbor criterion is introduced to extract the text components. The text is verified and restored based on text direction and a spatial study of the pixel distribution of components to filter out non-text components. The proposed method is evaluated on three benchmark datasets, namely ICDAR2005 and ICDAR2011 for horizontal text and MSRA-TD500 for non-horizontal straight text, as well as on our own dataset (CUTE80) of 80 images for curved text, to show its effectiveness and superiority over existing methods.
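    A minimal NumPy/OpenCV sketch of the underlying intuition: a pixel is kept as a text candidate only when the Sobel and Canny edge maps agree on it and the gradient a few pixels across the stroke has similar magnitude and opposite direction. The exact MDS, MMS and GVS formulations in the paper are more elaborate; the window size and tolerances below are illustrative assumptions.

```python
import cv2
import numpy as np

def text_pixel_candidates(gray, shift=3, mag_tol=0.2, ang_tol=np.deg2rad(15)):
    """Rough proxy for the symmetry tests on an 8-bit grayscale image.

    A pixel survives if (a) both the Sobel and Canny edge maps mark it and
    (b) the gradient `shift` columns away has a similar magnitude and an
    opposite direction, a crude stand-in for looking across a text stroke."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    ang = np.arctan2(gy, gx)

    sobel_edges = mag > mag.mean() + mag.std()    # crude Sobel edge map
    canny_edges = cv2.Canny(gray, 100, 200) > 0   # Canny edge map
    both = sobel_edges & canny_edges              # mutual edge agreement

    # compare each pixel with the one `shift` columns away
    mag_other = np.roll(mag, shift, axis=1)
    ang_other = np.roll(ang, shift, axis=1)
    similar_mag = np.abs(mag - mag_other) < mag_tol * (mag + mag_other + 1e-6)
    opposite_dir = np.abs(np.angle(np.exp(1j * (ang - ang_other - np.pi)))) < ang_tol
    return both & similar_mag & opposite_dir
```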